
    Data analytics and algorithms in policing in England and Wales: Towards a new policy framework

    RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI’s review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing. This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, is outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, these issues will also be relevant to other police technologies, and their use must be considered in parallel to the tools examined in this paper. The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research in the form of semi-structured interviews, focus groups and roundtable discussions. The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency. Any future policy framework should be principles-based and complement existing police guidance in a ‘tech-agnostic’ way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and evaluate the project against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human–machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.
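
    As a rough illustration of what "ongoing tracking and mitigation of discrimination risk" could look like in practice, the sketch below computes per-group selection rates and a disparate impact ratio for a hypothetical flagging model. The metric choice, the four-fifths heuristic and all data are illustrative assumptions, not drawn from the paper.

    ```python
    from collections import Counter

    def selection_rates(decisions, groups):
        """Positive-decision rate per demographic group.

        decisions: list of 0/1 model outputs (e.g. flagged for intervention)
        groups:    parallel list of group labels for each subject
        """
        totals, positives = Counter(groups), Counter()
        for d, g in zip(decisions, groups):
            positives[g] += d
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Ratio of lowest to highest group selection rate.

        A common screening heuristic (the 'four-fifths rule') treats
        ratios below 0.8 as a signal for further review.
        """
        return min(rates.values()) / max(rates.values())

    # Invented data, purely for illustration.
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(decisions, groups)
    print(rates)                          # per-group flag rates
    print(disparate_impact_ratio(rates))  # 0.6 here: would warrant review
    ```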

    The long arm of the algorithm? Automated Facial Recognition as evidence and trigger for police intervention

    Criminal law's efficient and accurate administration depends to a considerable extent on the ability of decision-makers to identify unique individuals, circumstances and events as instances of abstract terms (such as events raising ‘reasonable suspicion’) laid out in the legal framework. Automated Facial Recognition (AFR) has the potential to revolutionise the identification process, facilitate crime detection, and eliminate misidentification of suspects. This paper takes as its starting point the recent decision regarding the deployment of AFR by South Wales Police in order to discuss the lack of an underpinning conceptual framework pertinent to a broader consideration of AFR in other contexts. We conclude that the judgment does not give the green light to other fact-sensitive deployments of AFR. We consider two of these: a) use of AFR as a trigger for intervention short of arrest; b) use of AFR in an evidential context in criminal proceedings. On the face of it, AFR may appear objective and sufficient, but this is belied by the probabilistic nature of the output and the building of certain values into the tool, raising questions as to the justifiability of regarding the tool's output as an ‘objective’ ground for reasonable suspicion. The means by which the identification took place must be disclosed to the defence, if the Article 6 right to a fair trial is to be upheld, together with information regarding disregarded ‘matches’ and the error rates and uncertainties of the system itself. Furthermore, AFR raises the risk that scientific or algorithmic findings could usurp the role of the legitimate decision-maker, necessitating the development of a framework to protect the position of the human with decision-making prerogative.
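
    The point about the probabilistic nature of AFR output can be made concrete with a toy example: a similarity score only becomes a "match" once a threshold is chosen, and that choice (a value built into the tool) shifts the system's error rates. The scores below are invented and do not reflect any deployed AFR system.

    ```python
    def match_decisions(scores, threshold):
        """Turn probabilistic similarity scores into binary 'matches'."""
        return [s >= threshold for s in scores]

    def error_rates(scores_same, scores_diff, threshold):
        """False non-match and false match rates at a given threshold.

        scores_same: scores for genuine (same-person) comparisons
        scores_diff: scores for impostor (different-person) comparisons
        """
        fnmr = sum(s < threshold for s in scores_same) / len(scores_same)
        fmr  = sum(s >= threshold for s in scores_diff) / len(scores_diff)
        return fnmr, fmr

    # Invented scores: moving the threshold trades one error type
    # against the other, so the output is never simply 'objective'.
    same = [0.91, 0.84, 0.77, 0.95]
    diff = [0.40, 0.62, 0.71, 0.55]
    for t in (0.6, 0.7, 0.8):
        print(t, error_rates(same, diff, t))
    ```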

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance is needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
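
    As a toy illustration of how analytic methods can surface connections across disparate datasets, the sketch below runs a breadth-first search over a small merged relationship graph. The entities, data sources and graph structure are invented for illustration only.

    ```python
    from collections import deque

    def shortest_link(graph, start, goal):
        """Breadth-first search for the shortest chain of associations
        between two entities in a combined relationship graph."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no chain of associations found

    # Toy graph merging, say, call records and financial transfers.
    graph = {
        "subject_a": ["number_1"],
        "number_1":  ["subject_b"],
        "subject_b": ["account_9"],
        "account_9": ["subject_c"],
    }
    print(shortest_link(graph, "subject_a", "subject_c"))
    ```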

    Towards a Trustworthy Coronavirus Contact Tracing App

    The use of a coronavirus contact tracing app has not yet been demonstrated to be trustworthy, in terms of its purpose, reliability, effectiveness or potential harmfulness. Furthermore, the binary nature of its output must be addressed if trustworthiness is to be achieved.
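
    The concern about the binary nature of the app's output can be illustrated with a toy model: a graded exposure-risk score carries more information than the yes/no alert derived from it. The weighting and threshold below are purely illustrative assumptions, not any app's actual algorithm.

    ```python
    def exposure_risk(duration_min, distance_m):
        """Toy graded risk score: longer and closer contacts score higher.
        The weighting is purely illustrative."""
        return min(1.0, (duration_min / 30) * (2.0 / max(distance_m, 0.5)))

    def binary_alert(duration_min, distance_m, threshold=0.5):
        """App-style output: collapses the graded score to yes/no,
        discarding how near to the threshold a contact actually was."""
        return exposure_risk(duration_min, distance_m) >= threshold

    # Invented contacts: (duration in minutes, distance in metres).
    contacts = [(2, 3.0), (10, 2.0), (25, 1.0), (40, 0.5)]
    for d, m in contacts:
        print(d, m, round(exposure_risk(d, m), 2), binary_alert(d, m))
    ```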

    Intelligence, policing and the use of algorithmic analysis: a freedom of information-based study

    This article is an exploration of some of the legal, policy and practical issues of using what is termed 'algorithmic analysis' of police intelligence in the UK today. This type of intelligence analysis is thought to facilitate accurate and 'predictive' policing planning, strategy and tactics. There are, however, ethical and legal issues around this growing policy stance, many of them predicated on concerns about privacy and potential discrimination. To gain a better understanding of these issues as they are currently developing, a freedom of information (FOI) request was sent in several parts to all police forces in the UK: i) seeking to establish the extent to which algorithmic analysis of intelligence is currently used in UK policing; ii) investigating the handling of intelligence by police forces; and iii) reviewing how the police in the UK regulate and monitor disciplinary issues around the handling of intelligence. We gained only a partial picture of these issues, since there are methodological limitations to FOI-based studies, but the responses to our FOI request revealed enough disparities and differences in developing practice to suggest a number of recommendations regarding the legality, accountability and transparency of 'algorithmic' police intelligence analysis.

    Small Universal Antiport P Systems and Universal Multiset Grammars

    Based on the construction of a universal register machine, we construct a universal antiport P system working with 31 rules in the maximally parallel mode in one membrane, and a universal antiport P system with forbidden context working with 16 rules in the sequential derivation mode in one membrane, for computing any partial recursive function on the set of natural numbers. For accepting/generating any arbitrary recursively enumerable set of natural numbers we need 31/33 and 16/18 rules, respectively. As a consequence of the result for antiport P systems with forbidden context, we immediately infer similar results for forbidden random context multiset grammars with arbitrary rules.
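
    For readers unfamiliar with the model, the sketch below simulates a one-membrane antiport P system in the sequential derivation mode: an antiport rule (u, out; v, in) exchanges a multiset u from the membrane for a multiset v from the environment. The two-rule system shown is a toy example, not the universal construction from the paper, and forbidden contexts are not modelled.

    ```python
    import random
    from collections import Counter

    def applicable(region, rule):
        """An antiport rule (out_multiset, in_multiset) is applicable
        when the region contains the objects to be sent out; the
        environment is assumed to hold unboundedly many copies."""
        out, _ = rule
        return all(region[o] >= n for o, n in out.items())

    def step(region, rules):
        """One sequential-mode step: a single applicable rule is
        applied, chosen non-deterministically."""
        choices = [r for r in rules if applicable(region, r)]
        if not choices:
            return False  # halting: no rule is applicable
        out, inn = random.choice(choices)
        region.subtract(out)  # objects leave to the environment
        region.update(inn)    # objects enter from the environment
        return True

    # Toy rules: exchange two a's for one b, and one b for one c.
    rules = [(Counter("aa"), Counter("b")), (Counter("b"), Counter("c"))]
    region = Counter("aaaa")
    while step(region, rules):
        pass
    print(dict(+region))  # membrane contents at halting: {'c': 2}
    ```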

    P Systems with Antiport Rules for Evolution Rules

    We investigate a variant of evolution-communication P systems where the computation is performed in two substeps. First, all possible antiport rules are applied in a non-deterministic, maximally parallel way, moving evolution rules across membranes. In the second substep, evolution rules are applied to suitable objects in a maximally parallel way, too. Thus, objects can be the subject of change, but are never moved themselves. As the result of a halting computation, we consider the multiset of objects present in a designated output membrane. When using catalytic evolution rules, we already obtain universal computational power with only one catalyst and one membrane. For systems without catalysts we obtain a characterization of the Parikh images of ET0L languages.
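
    A minimal sketch of the second substep only: one maximally parallel application of non-cooperative evolution rules a -> w to a multiset of objects. The antiport substep that moves rules between membranes, and catalytic rules, are omitted, and the rule set is invented for illustration.

    ```python
    import random
    from collections import Counter

    def evolve_max_parallel(objects, rules):
        """One maximally parallel substep: every object with at least
        one applicable evolution rule a -> w is rewritten in this step;
        competing rules for the same symbol are chosen
        non-deterministically, object by object."""
        result = Counter()
        for obj, count in objects.items():
            right_sides = [rhs for lhs, rhs in rules if lhs == obj]
            if not right_sides:
                result[obj] += count  # no rule: object stays unchanged
                continue
            for _ in range(count):
                result.update(random.choice(right_sides))
        return result

    # Toy rules: a -> bb, b -> c (strings stand for output multisets).
    rules = [("a", "bb"), ("b", "c")]
    conf = Counter("aab")
    for _ in range(3):
        conf = evolve_max_parallel(conf, rules)
    print(dict(+conf))  # all objects eventually rewritten to c's
    ```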

    'Being on our radar does not necessarily mean being under our microscope': the regulation and retention of police intelligence

    The doctrinal development that prompted this piece was the recent decision of the Supreme Court in R (Catt and T) v Secretary of State for the Home Department [2015] UKSC 9. This failure of two conjoined attempts, through judicial review, to have information thoroughly deleted from police databases, in relation to two rather different sets of circumstances and policy pressures, is a landmark judgment in the surveillance and privacy law field as a whole. The judgment of the Supreme Court in Catt is discussed in detail in a later Part of this piece. Suffice it to say, by way of a quick introduction to the case, that Catt in the Supreme Court was a successful appeal by the Metropolitan Police, in the form of a 4-1 split judgment in their favour. John Catt ultimately lost his case after seeking the deletion of text-based records from databases operated by a police unit with a national anti-extremism intelligence remit. In Part 1 of this piece we aim to give an introduction to some of the tensions that arise between privacy and human rights, on the one hand, and police efficacy and operational pressures, on the other, in the context of intelligence gathering, analysis and retention. In Part 2, we move on to consider the concepts of information and analysis comprising 'police intelligence', as found in the existing academic literature on the topic. Part 3 of this piece provides a short review of concepts and practices in relation to electronic intelligence databases operated by the police more broadly, across different police jurisdictions and cultures, while Part 4 addresses the police use of intelligence databases in the UK context more specifically. Parts 5, 6 and 7 in turn address and analyse police intelligence retention and regulation as a human rights issue; the recent Supreme Court decision in Catt; and a broader commentary and critique of that decision. In Part 8 of this piece we evaluate some recent changes to guidance on the retention and deletion of police records made following the Supreme Court's decision in Catt. Concluding the piece, in Part 9, we highlight the conclusions and recommendations we feel able to make as to i) the shifting and advancing realities of police database technology and surveillance through electronic records generally; ii) definitional and doctrinal problems presented by varying concepts of what is meant by 'police intelligence'; and iii) the need for a single national regulator in the field of police intelligence more generally.

    Data Analytics and Algorithmic Bias in Policing

    This paper summarises the use of analytics and algorithms for policing within England and Wales, and explores different types of bias that can arise during the product lifecycle. The findings are based on in-depth consultation with police forces, civil society organisations, academics and legal experts. The primary purpose of the project is to inform the Centre for Data Ethics and Innovation's ongoing review into algorithmic bias in the policing sector. RUSI’s second and final report for this project will be published in early 2020; it will include specific recommendations for the final Code of Practice and incorporate the cumulative feedback received during the consultation process.

    Editorial
